Block Wiedemann Algorithm

The block Wiedemann algorithm for computing kernel vectors of a matrix over a finite field is a generalization by Don Coppersmith of an algorithm due to Doug Wiedemann.


Wiedemann's algorithm

Let M be an n \times n square matrix over some finite field F, let x_0 be a random vector of length n, and let x = M x_0. Consider the sequence of vectors S = [x, Mx, M^2 x, \ldots] obtained by repeatedly multiplying the vector by the matrix M; let y be any other vector of length n, and consider the sequence of finite-field elements S_y = [y \cdot x, \, y \cdot Mx, \, y \cdot M^2 x, \ldots].

We know that the matrix M has a minimal polynomial; by the Cayley–Hamilton theorem we know that this polynomial is of degree (which we will call n_0) no more than n. Say \sum_{r=0}^{n_0} p_r M^r = 0. Then \sum_{r=0}^{n_0} y \cdot (p_r (M^r x)) = 0; so the minimal polynomial of the matrix annihilates the sequence S and hence S_y. But the Berlekamp–Massey algorithm allows us to calculate relatively efficiently some sequence q_0, \ldots, q_L with \sum_{i=0}^{L} q_i S_y[i+r] = 0 \;\forall\; r. Our hope is that this sequence, which by construction annihilates S_y, actually annihilates S; if so, we have \sum_{i=0}^{L} q_i M^i x = 0. We then take advantage of the initial definition of x to write M \sum_{i=0}^{L} q_i M^i x_0 = 0, and so \sum_{i=0}^{L} q_i M^i x_0 is a hopefully non-zero kernel vector of M.
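The following is a minimal sketch of this procedure in Python over a small prime field GF(p), using dense matrix-vector products for simplicity where a real implementation would exploit sparsity; the function names and the textbook Berlekamp–Massey routine are illustrative rather than taken from any particular library:

import random

p = 101  # a small illustrative prime; any finite field works in principle

def mat_vec(M, v):
    # Dense matrix-vector product over GF(p); a real implementation
    # would use a sparse representation of M here.
    return [sum(mij * vj for mij, vj in zip(row, v)) % p for row in M]

def berlekamp_massey(s):
    # Textbook Berlekamp-Massey over GF(p): returns the connection
    # polynomial C with s[n] + sum_{i=1}^{L} C[i]*s[n-i] = 0 for n >= L.
    C, B = [1], [1]
    L, m, b = 0, 1, 1
    for n in range(len(s)):
        d = s[n]
        for i in range(1, L + 1):
            d = (d + C[i] * s[n - i]) % p
        if d == 0:
            m += 1
            continue
        coef = d * pow(b, p - 2, p) % p   # d / b mod p
        T = C[:]                          # snapshot before updating
        C = C + [0] * (len(B) + m - len(C))
        for i, Bi in enumerate(B):
            C[i + m] = (C[i + m] - coef * Bi) % p
        if 2 * L <= n:
            L, B, b, m = n + 1 - L, T, d, 1
        else:
            m += 1
    return C

def wiedemann_kernel_vector(M):
    # Try to produce a non-zero kernel vector of a singular matrix M;
    # retry with fresh randomness if the projection is unlucky.
    n = len(M)
    x0 = [random.randrange(p) for _ in range(n)]
    x = mat_vec(M, x0)
    y = [random.randrange(p) for _ in range(n)]
    # S_y = [y.x, y.Mx, y.M^2 x, ...]; 2n terms suffice for Berlekamp-Massey.
    seq, v = [], x[:]
    for _ in range(2 * n):
        seq.append(sum(yi * vi for yi, vi in zip(y, v)) % p)
        v = mat_vec(M, v)
    # Reversing the connection polynomial gives q_0..q_L with
    # sum_i q[i]*S_y[r+i] = 0 for all r, as in the text.
    q = berlekamp_massey(seq)[::-1]
    # Evaluate sum_i q[i]*M^i*x0; multiplying it by M gives
    # sum_i q[i]*M^i*x = 0, so this is the candidate kernel vector.
    w, v = [0] * n, x0[:]
    for qi in q:
        w = [(wi + qi * vi) % p for wi, vi in zip(w, v)]
        v = mat_vec(M, v)
    return w

For instance, if M is a singular 4 \times 4 matrix over GF(101) whose last row is the sum of its first two rows, wiedemann_kernel_vector(M) should return a vector w with mat_vec(M, w) == [0, 0, 0, 0]; an unlucky choice of y can yield a polynomial that annihilates S_y but not S, in which case one simply retries.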


The block Wiedemann (or Coppersmith–Wiedemann) algorithm

The natural implementation of sparse matrix arithmetic on a computer makes it easy to compute the sequence S in parallel for a number of vectors equal to the width of a machine word; indeed, it will normally take no longer to compute for that many vectors than for one. If you have several processors, you can compute the sequence S for a different set of random vectors on each of them in parallel. It turns out, by a generalization of the Berlekamp–Massey algorithm to provide a sequence of small matrices, that you can take the sequences produced for a large number of vectors and generate a kernel vector of the original large matrix. You need to compute y_i \cdot M^t x_j for some i = 0, \ldots, i_\max, j = 0, \ldots, j_\max, t = 0, \ldots, t_\max, where i_\max, j_\max, t_\max need to satisfy t_\max > \frac{n}{i_\max} + \frac{n}{j_\max} + O(1) and the y_i are vectors of length n; but in practice you can take the y_i to be a sequence of unit vectors and simply write out the first i_\max entries of your vectors at each time t.
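As a hedged illustration of the data-collection phase only, the sketch below (in the same Python setting as above, reusing p, random, and mat_vec) records the sequence of small matrices y_i \cdot M^t x_j; the matrix generalization of Berlekamp–Massey that turns this sequence into kernel vectors, which is the substance of Coppersmith's contribution, is omitted, and block_sequence is a hypothetical name:

def block_sequence(M, i_max, j_max, t_max):
    # Collect the small matrices a[t][i][j] = y_i . M^t x_j for
    # t = 0 .. t_max - 1, taking the y_i to be unit vectors so that
    # y_i . M^t x_j is simply the i-th entry of M^t x_j.
    n = len(M)
    # j_max independent random starting vectors; on real hardware each
    # column would live in one bit-position of a machine word, or on
    # its own processor.
    X = [[random.randrange(p) for _ in range(n)] for _ in range(j_max)]
    a = []
    for t in range(t_max):
        a.append([[X[j][i] for j in range(j_max)] for i in range(i_max)])
        X = [mat_vec(M, Xj) for Xj in X]
    return a  # t_max small i_max-by-j_max matrices over GF(p)

Choosing t_max just above \frac{n}{i_\max} + \frac{n}{j_\max} keeps the collected data as small as the bound allows; the resulting sequence of i_\max \times j_\max matrices is what the vector-generating-polynomial computations described in Villard's and Thomé's papers (see the references) consume.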


References

Wiedemann, D., "Solving sparse linear equations over finite fields", IEEE Transactions on Information Theory, IT-32, pp. 54–62, 1986.

Coppersmith, D., "Solving homogeneous linear equations over GF(2) via block Wiedemann algorithm", Mathematics of Computation, 62 (1994), 333–350.

Villard's 1997 research report "A study of Coppersmith's block Wiedemann algorithm using matrix polynomials" (the cover material is in French but the content is in English) is a reasonable description. Thomé's paper "Subquadratic computation of vector generating polynomials and improvement of the block Wiedemann algorithm" uses a more sophisticated FFT-based algorithm for computing the vector generating polynomials, and describes a practical implementation with i_\max = j_\max = 4 used to compute a kernel vector of a 484603 \times 484603 matrix with entries modulo 2^{607} - 1, and hence to compute discrete logarithms in the field GF(2^{607}).